Search for: All records
Creators/Authors contains: "Wang, Chenyi"

  1. Cooperative perception (CP) extends detection range and situational awareness in connected and autonomous vehicles by aggregating information from multiple agents. However, attackers can inject fabricated data into shared messages to mount adversarial attacks. While prior defenses detect object spoofing, object removal attacks remain a serious threat; existing removal attacks, however, require unnaturally large perturbations and rely on unrealistic assumptions such as complete knowledge of the participating agents, which limits their success. In this paper, we present SOMBRA, a stealthy and practical object removal attack that exploits the attentive fusion mechanism in modern CP algorithms (a hedged sketch of attentive fusion follows these abstracts). SOMBRA achieves 99% success in both targeted and mass object removal scenarios (a 90%+ improvement over prior art) with less than 1% perturbation strength and no knowledge of any benign agent other than the victim. To address the unique vulnerabilities of attentive fusion within CP, we propose LUCIA, a novel trustworthiness-aware attention mechanism that proactively mitigates adversarial features. LUCIA achieves 94.93% success against targeted attacks, reduces mass removal rates by over 90%, restores detection to baseline levels, and lowers defense overhead by 300x compared to prior art. Our contributions set a new state of the art for adversarial attacks and defenses in CP.
  2. Multi-Object Tracking (MOT) is a critical task in computer vision, with applications ranging from surveillance systems to autonomous driving. However, threats to MOT algorithms have not yet been widely studied. In particular, incorrect association between tracked objects and their assigned IDs can lead to severe consequences, such as wrong trajectory predictions. Previous attacks against MOT either focused on hijacking the trackers of individual objects or manipulated tracker IDs by attacking the integrated object detection (OD) module in the digital domain; such attacks are model-specific, non-robust, and only able to affect specific samples in offline datasets. In this paper, we present ADVTRAJ, the first online and physical ID-manipulation attack against tracking-by-detection MOT, in which an attacker uses adversarial trajectories to transfer its ID to a targeted object and confuse the tracking system, without attacking OD (the SORT-style association step it exploits is sketched after these abstracts). Our simulation results in CARLA show that ADVTRAJ fools ID assignment with a 100% success rate in various scenarios for white-box attacks against SORT, and that the attacks transfer well (up to a 93% attack success rate) to state-of-the-art (SOTA) MOT algorithms owing to their common design principles. We characterize the patterns of trajectories generated by ADVTRAJ and propose two universal adversarial maneuvers that can be performed by a human walker/driver in everyday scenarios. Our work reveals under-explored weaknesses in the object association phase of SOTA MOT systems and provides insights into enhancing the robustness of such systems.
  3. Camera-based perception is a central component of the visual perception of autonomous systems. Recent works have investigated latency attacks against perception pipelines, which can lead to a denial of service against the autonomous system. Unfortunately, these attacks lack real-world applicability, either relying on digital perturbations or requiring large, unscalable, and highly visible patches that cover up the victim's view. In this paper, we propose Detstorm, a novel physically realizable latency attack against camera-based perception. Detstorm uses projector perturbations to delay perception by creating a large number of adversarial objects. These objects are optimized on four objectives to evade filtering by multiple Non-Maximum Suppression (NMS) approaches (a sketch of greedy NMS follows these abstracts). To maximize the number of created objects in a dynamic physical environment, Detstorm takes a unique greedy approach, segmenting the environment into “zones” containing distinct object classes and maximizing the number of created objects per zone. Detstorm adapts to changes in the environment in real time, recombining perturbation patterns via our zone-stitching process into a contiguous, physically projectable image. Evaluations in both simulated and real-world experiments show that Detstorm causes a 506% average increase in detected objects, delays perception results by up to 8.1 seconds, and can cause physical consequences for real-world autonomous driving systems.
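The first result centers on attention-weighted fusion of features shared across CP agents. Below is a minimal sketch of that idea, plus a trust-biased variant in the spirit of LUCIA's trustworthiness-aware attention. The function name `attentive_fusion`, the dot-product attention, and the log-trust bias are illustrative assumptions, not the papers' actual architecture.

```python
# Minimal sketch of attentive fusion in cooperative perception, with an
# optional trust-weighted variant in the spirit of LUCIA. The dot-product
# attention and log-trust bias are assumptions for illustration, not the
# papers' implementation.
import torch
import torch.nn.functional as F

def attentive_fusion(features, trust=None):
    """Fuse per-agent BEV feature maps of shape (N, C, H, W); agent 0 is ego.

    trust: optional per-agent scores in [0, 1]; low-trust agents receive
    proportionally less attention weight before normalization.
    """
    n, c, h, w = features.shape
    ego = features[0:1]                                  # (1, C, H, W)
    # Per-location attention logits: ego-vs-agent feature similarity.
    logits = (ego * features).sum(dim=1) / c ** 0.5      # (N, H, W)
    if trust is not None:
        # Down-weight agents deemed untrustworthy (a SOMBRA-style attack
        # works by winning these attention weights with crafted features).
        logits = logits + torch.log(trust.clamp_min(1e-6)).view(n, 1, 1)
    weights = F.softmax(logits, dim=0)                   # normalize over agents
    return (weights.unsqueeze(1) * features).sum(dim=0)  # (C, H, W)

# Example: three agents, one flagged as low-trust.
feats = torch.randn(3, 64, 100, 100)
fused = attentive_fusion(feats, trust=torch.tensor([1.0, 1.0, 0.1]))
```

The point of the sketch is the softmax over agents: a single attacker that drives its logits high can dominate the fused map, which is why biasing the logits by a trust score is a natural mitigation.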
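The second result targets the data-association step of tracking-by-detection. The sketch below shows SORT-style IoU matching with the Hungarian algorithm; `associate`, the cost construction, and the 0.3 IoU threshold are assumptions for illustration, but they capture the matching step an adversarial trajectory must win to inherit a victim's ID.

```python
# Minimal sketch of IoU-based ID association in tracking-by-detection
# (SORT-style). Illustrative only; not ADVTRAJ's or SORT's exact code.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Match predicted track boxes to current detections.

    Returns (track_idx, det_idx) pairs. An attacker that steers its own
    trajectory so its predicted box overlaps a victim's detection can win
    this matching and hand its ID to the victim, without touching OD.
    """
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_thresh]
```

Because most SOTA trackers share this predict-then-match structure, an ID-manipulation maneuver that works against one matcher tends to transfer, which is consistent with the transferability the abstract reports.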
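The third result exploits the cost of post-detection filtering. The sketch below is textbook greedy NMS, not Detstorm's pipeline; it shows why a flood of candidates crafted to stay below the IoU threshold, and hence evade suppression, inflates per-frame latency.

```python
# Minimal sketch of greedy NMS, whose pairwise-IoU work grows roughly
# quadratically with the number of surviving candidates. Textbook code,
# not Detstorm's implementation.
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns kept indices."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter + 1e-9)
        # Adversarial boxes crafted to keep pairwise IoU below the
        # threshold all survive, so `order` shrinks by only one per pass.
        order = order[1:][iou <= iou_thresh]
    return keep
```

Under such flooding every kept box costs a full pass over the remaining candidates, and the surviving detections also burden downstream tracking and planning, which is the latency mechanism the abstract describes.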